Alternative Global–Local Shrinkage Priors Using Hypergeometric–Beta Mixtures

Authors

  • Nicholas G. Polson
  • James G. Scott
Abstract

This paper introduces an approach to estimation in possibly sparse data sets using shrinkage priors based upon the class of hypergeometric-beta distributions. These widely applicable priors turn out to be a four-parameter generalization of the beta family, and are pseudo-conjugate: they cannot themselves be expressed in closed form, but they do yield tractable moments and marginal likelihoods when used as priors for the mean of a normal distribution. These priors are useful in situations where standard priors are inappropriate or ill-behaved. Non-Bayesians will find these priors useful for generating easily computable shrinkage estimators that have excellent risk properties. Bayesians will find them useful for generating computationally tractable priors for a variance parameter. We illustrate the use of these priors on a variety of global and local shrinkage problems, and we prove a theorem that characterizes their risk properties when used for estimation of a normal mean under a quadratic loss function.
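As context for the estimation problem the abstract describes (shrinking estimates of a normal mean to improve quadratic risk), the following is a minimal sketch of a classical global shrinkage rule, the positive-part James–Stein estimator. This is a standard baseline illustration, not the hypergeometric-beta prior construction of the paper itself.

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimator for y ~ N(theta, sigma2 * I), p >= 3.

    Shrinks the observation vector toward zero by a data-driven global factor.
    """
    p = len(y)
    # Shrinkage factor is clipped at zero (the "positive-part" modification).
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.dot(y, y)))
    return shrink * y

rng = np.random.default_rng(0)
theta = np.zeros(10)            # a sparse (here, all-zero) true mean
y = theta + rng.standard_normal(10)
est = james_stein(y)
# The shrunken estimate never has larger norm than the raw observations.
print(np.sum(est**2) <= np.sum(y**2))
```

Because the global factor lies in [0, 1], every coordinate is pulled toward zero by the same amount; the "global-local" priors discussed here and in the related papers below refine this by allowing coordinate-specific (local) shrinkage on top of the global level.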


Similar resources

Mixtures of g-Priors in Generalized Linear Models

Mixtures of Zellner’s g-priors have been studied extensively in linear models and have been shown to have numerous desirable properties for Bayesian variable selection and model averaging. Several extensions of g-priors to Generalized Linear Models (GLMs) have been proposed in the literature; however, the choice of prior distribution of g and resulting properties for inference have received con...


Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction

We study the classic problem of choosing a prior distribution for a location parameter β = (β1, . . . , βp) as p grows large. First, we study the standard “global-local shrinkage” approach, based on scale mixtures of normals. Two theorems are presented which characterize certain desirable properties of shrinkage priors for sparse problems. Next, we review some recent results showing how Lévy pr...


Generalized Beta Mixtures of Gaussians

In recent years, a rich variety of shrinkage priors have been proposed that have great promise in addressing massive regression problems. In general, these new priors can be expressed as scale mixtures of normals, but have more complex forms and better properties than traditional Cauchy and double exponential priors. We first propose a new class of normal scale mixtures through a novel generali...


Bayesian shrinkage

Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimension...


Local Shrinkage Rules, Lévy Processes, and Regularized Regression

We use Lévy processes to generate joint prior distributions, and therefore penalty functions, for a location parameter β = (β1, . . . , βp) as p grows large. This generalizes the class of local-global shrinkage rules based on scale mixtures of normals, illuminates new connections among disparate methods, and leads to new results for computing posterior means and modes under a wide class of prior...



Publication date: 2009